19 research outputs found

    Deep Learning Framework for Spleen Volume Estimation from 2D Cross-sectional Views

    Abnormal spleen enlargement (splenomegaly) is regarded as a clinical indicator for a range of conditions, including liver disease, cancer and blood diseases. While spleen length measured from ultrasound images is a commonly used surrogate for spleen size, spleen volume remains the gold standard metric for assessing splenomegaly and the severity of related clinical conditions. Computed tomography is the main imaging modality for measuring spleen volume, but it is less accessible in areas where there is a high prevalence of splenomegaly (e.g., the Global South). Our objective was to enable automated spleen volume measurement from 2D cross-sectional segmentations, which can be obtained from ultrasound imaging. In this study, we describe a variational autoencoder-based framework to measure spleen volume from single- or dual-view 2D spleen segmentations. We propose and evaluate three volume estimation methods within this framework. We also demonstrate how 95% confidence intervals of volume estimates can be produced to make our method more clinically useful. Our best model achieved mean relative volume accuracies of 86.62% and 92.58% for single- and dual-view segmentations, respectively, surpassing the performance of the clinical standard approach of linear regression using manual measurements and a comparative deep learning-based 2D-3D reconstruction approach. The proposed spleen volume estimation framework can be integrated into standard clinical workflows which currently use 2D ultrasound images to measure spleen length. To the best of our knowledge, this is the first work to achieve direct 3D spleen volume estimation from 2D spleen segmentations. Comment: 22 pages, 7 figures.
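
    The abstract does not detail how the relative volume accuracy or the 95% confidence intervals are obtained, so the sketch below is illustrative only: it assumes relative accuracy is defined as 1 - |predicted - true| / true (expressed as a percentage) and that an interval can be formed from percentiles of repeated volume predictions, e.g. several draws from the VAE latent space. The helper names and example numbers are hypothetical.

```python
import numpy as np

def relative_volume_accuracy(v_pred, v_true):
    # Assumed metric: 1 - |error| / true volume, as a percentage.
    return 100.0 * (1.0 - np.abs(v_pred - v_true) / v_true)

def volume_with_ci(volume_samples, alpha=0.05):
    # Point estimate and (1 - alpha) percentile interval from repeated
    # volume predictions (e.g. several samples drawn from the VAE latent space).
    volume_samples = np.asarray(volume_samples)
    point = volume_samples.mean()
    lo, hi = np.percentile(volume_samples, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)

# Hypothetical example: 50 volume predictions (in mL) for one subject.
samples = np.random.normal(loc=310.0, scale=12.0, size=50)
v_hat, (ci_lo, ci_hi) = volume_with_ci(samples)
print(f"estimated volume {v_hat:.0f} mL, 95% CI [{ci_lo:.0f}, {ci_hi:.0f}] mL")
print(f"relative accuracy vs. a 300 mL reference: {relative_volume_accuracy(v_hat, 300.0):.1f}%")
```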

    Uncertainty Aware Training to Improve Deep Learning Model Calibration for Classification of Cardiac MR Images

    Quantifying uncertainty of predictions has been identified as one way to develop more trustworthy artificial intelligence (AI) models beyond conventional reporting of performance metrics. When considering their role in a clinical decision support setting, AI classification models should ideally avoid confident wrong predictions and maximise the confidence of correct predictions. Models that do this are said to be well-calibrated with regard to confidence. However, relatively little attention has been paid to how to improve calibration when training these models, i.e., to make the training strategy uncertainty-aware. In this work we evaluate three novel uncertainty-aware training strategies, comparing them against two state-of-the-art approaches. We analyse performance on two different clinical applications: cardiac resynchronisation therapy (CRT) response prediction and coronary artery disease (CAD) diagnosis from cardiac magnetic resonance (CMR) images. The best-performing model in terms of both classification accuracy and the most common calibration measure, expected calibration error (ECE), was the Confidence Weight method, a novel approach that weights the loss of samples to explicitly penalise confident incorrect predictions. The method reduced the ECE by 17% for CRT response prediction and by 22% for CAD diagnosis when compared to a baseline classifier in which no uncertainty-aware strategy was included. In both applications, in addition to the reduction in ECE, there was a slight increase in accuracy: from 69% to 70% for CRT response prediction and from 70% to 72% for CAD diagnosis. However, our analysis showed a lack of consistency in terms of optimal models when using different calibration measures. This indicates the need for careful consideration of performance metrics when training and selecting models for complex high-risk applications in healthcare.
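
    As a rough illustration of the quantities discussed above, the sketch below computes a standard expected calibration error and a cross-entropy loss that up-weights confident wrong predictions, in the spirit of the Confidence Weight idea. The abstract does not give the exact weighting formula, so the `penalty` term and the weighting scheme are assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def expected_calibration_error(probs, labels, n_bins=10):
    # Standard ECE: bin predictions by confidence and average the
    # |accuracy - confidence| gap, weighted by bin occupancy.
    conf, pred = probs.max(dim=1)
    correct = pred.eq(labels).float()
    ece = torch.zeros(1)
    edges = torch.linspace(0, 1, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = (correct[in_bin].mean() - conf[in_bin].mean()).abs()
            ece += in_bin.float().mean() * gap
    return ece.item()

def confidence_weighted_loss(logits, labels, penalty=2.0):
    # Illustrative only: samples that are wrong AND predicted with high
    # confidence receive a larger weight in the cross-entropy loss.
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    wrong = pred.ne(labels).float()
    weights = 1.0 + penalty * wrong * conf
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (weights.detach() * per_sample).mean()
```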

    Druggable proteins influencing cardiac structure and function: Implications for heart failure therapies and cancer cardiotoxicity

    Dysfunction of either the right or left ventricle can lead to heart failure (HF) and subsequent morbidity and mortality. We performed a genome-wide association study (GWAS) of 16 cardiac magnetic resonance (CMR) imaging measurements of biventricular function and structure. Cis-Mendelian randomization (MR) was used to identify plasma proteins associated with CMR traits as well as with any of the following cardiac outcomes: HF, non-ischemic cardiomyopathy, dilated cardiomyopathy (DCM), atrial fibrillation, or coronary heart disease. In total, 33 plasma proteins were prioritized, including repurposing candidates for DCM and/or HF: IL18R (providing indirect evidence for IL18), I17RA, GPC5, LAMC2, PA2GA, CD33, and SLAF7. In addition, 13 of the 25 druggable proteins (52%; 95% confidence interval, 0.31 to 0.72) could be mapped to compounds with known oncological indications or side effects. These findings provide leads to facilitate drug development for cardiac disease and suggest that cardiotoxicities of several cancer treatments might represent mechanism-based adverse effects.
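
    The abstract does not state how the 95% confidence interval for the 13/25 proportion was derived; the reported range of 0.31 to 0.72 is consistent with an exact (Clopper-Pearson) binomial interval, which the short sketch below reproduces approximately. This is a worked illustration, not the authors' code.

```python
from scipy.stats import beta

def clopper_pearson_ci(k, n, alpha=0.05):
    # Exact (Clopper-Pearson) two-sided binomial confidence interval.
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# 13 of 25 druggable proteins mapped to oncology compounds (52%).
lo, hi = clopper_pearson_ci(13, 25)
print(f"13/25 = {13/25:.0%}, 95% CI [{lo:.2f}, {hi:.2f}]")  # approx. [0.31, 0.72]
```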

    Automatic Localisation and Segmentation of the Left Ventricle in Echocardiographic Images

    Echocardiography is a common non-invasive diagnostic imaging modality that uses ultrasound to capture the structure and function of the heart. In recent years there has been a growing need to automate the processing of cardiac ultrasound images, which involves many tasks, such as image view classification, wall motion analysis, automatic placement of the Doppler gate over the valves, etc. In particular, the delineation of the left ventricle in ultrasound data is an important tool for producing a quantitative assessment of the health of the heart. In this study, we propose a processing chain for the localisation and segmentation of the left ventricle in 2D echocardiography in apical two-chamber, three-chamber and four-chamber views. The system is built on a machine learning approach that extracts knowledge from an annotated database. To reduce the complexity of the problem, it is divided into two parts: a pose estimation of the left ventricle and a non-rigid segmentation of its contour. The pose estimation problem is posed as a binary classification problem based on Boosting algorithms with Haar-like features; the main idea of Boosting is to combine the output of several weak classifiers to produce a powerful decision-making committee. The non-rigid segmentation is based on a cascaded regression framework, in which every regressor learns the relationship between the local neighbourhood and the displacement from the true feature location, progressively refining the shape initialisation given by the rigid detection.
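
    The central idea in the detection stage, combining many weak classifiers into a strong committee, is illustrated by the minimal discrete AdaBoost sketch below, which uses plain threshold stumps in place of Haar-like feature responses. It is a generic illustration of Boosting rather than the thesis implementation; the cascaded regression stage, a sequence of regressors that progressively refine the shape estimate, is not shown.

```python
import numpy as np

def adaboost_train(X, y, n_rounds=50):
    # Minimal discrete AdaBoost over threshold stumps; labels y must be in {-1, +1}.
    # Each round picks the stump with the lowest weighted error, gives it a vote
    # weight alpha, and re-weights the samples it misclassified.
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # sample weights
    ensemble = []                        # (feature, threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weak-learner vote weight
        w *= np.exp(-alpha * y * pred)          # emphasise misclassified samples
        w /= w.sum()
        ensemble.append((j, thr, pol, alpha))
    return ensemble

def adaboost_predict(ensemble, X):
    # Strong classifier: sign of the weighted sum of weak-classifier votes.
    score = sum(alpha * np.where(pol * (X[:, j] - thr) > 0, 1, -1)
                for j, thr, pol, alpha in ensemble)
    return np.sign(score)
```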